-
Optical sensing, imaging, and detection in degraded and turbid environments is a challenging problem with many applications, including oceanography, underwater communication, imaging in fog, low light, occlusion, autonomous navigation, security, defense, and surveillance. Conventional sensing and imaging systems cannot address these challenges, so dedicated hardware and algorithms are needed. Degraded environments introduce light scattering and absorption, which adversely affect image quality, lower the signal-to-noise ratio, and compromise system performance. This Keynote Address presents an overview of multi-dimensional optical sensing and imaging systems and dedicated algorithms designed for applications in degraded environments, including operation in turbid water.
-
In this paper, we present a polarimetric image restoration approach that aims to recover the Stokes parameters and the degree of linear polarization from their degraded counterparts. The Stokes parameters and the degree of linear polarization are affected by degradations such as scattering and attenuation present in partially occluded or turbid media, including turbid water. The polarimetric image restoration with corresponding Mueller matrix estimation is performed using polarization-informed deep learning and 3D integral imaging. An unsupervised image-to-image translation (UNIT) framework is utilized to obtain clean Stokes parameters from the degraded ones. Additionally, a multi-output convolutional neural network (CNN) branch is used to predict the Mueller matrix estimate along with an estimate of the corresponding residue. The degree of linear polarization, together with the Mueller matrix estimate, provides information about the characteristics of the underlying transmission medium and the object under consideration. The approach has been evaluated under different environmentally degraded conditions, such as various levels of turbidity and partial occlusion. The 3D integral imaging reduces the effects of degradations in a turbid medium. A performance comparison between 3D and 2D imaging under varying scene conditions is provided. Experimental results suggest that the proposed approach is promising under the scene degradations considered. To the best of our knowledge, this is the first report on polarization-informed deep learning in 3D imaging that attempts to recover the polarimetric information along with the corresponding Mueller matrix estimate in a degraded environment.
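As a point of reference for the polarimetric quantities mentioned above, the following minimal sketch computes the linear Stokes parameters and the degree of linear polarization from four intensity captures taken behind polarizers at 0°, 45°, 90°, and 135°. The function name and the random arrays standing in for real captures are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def stokes_and_dolp(i0, i45, i90, i135, eps=1e-8):
    """Linear Stokes parameters and degree of linear polarization (DoLP)
    from intensity images captured behind polarizers at 0, 45, 90, 135 deg."""
    s0 = i0 + i90                         # total intensity
    s1 = i0 - i90                         # horizontal vs. vertical preference
    s2 = i45 - i135                       # +45 vs. -45 preference
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)
    return s0, s1, s2, dolp

# Example with random 64x64 arrays standing in for polarimetric captures
rng = np.random.default_rng(0)
frames = [rng.random((64, 64)) for _ in range(4)]
s0, s1, s2, dolp = stokes_and_dolp(*frames)
print(dolp.mean())
```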
-
It is generally assumed that oceanic effects, such as absorption, scattering, and turbulence, deteriorate underwater optical imaging and/or signal detection. In this paper, we present an interesting observation that slight turbidity may actually improve the performance of underwater optical imaging in the presence of occlusion. We have carried out simulations and optical experiments in underwater degraded environments to investigate this hypothesis. For the simulations, the Monte Carlo method was used to analyze imaging performance under varying turbidity and occlusion conditions. Additionally, optical experiments were conducted in various turbid and partially occluded environments. We considered the effects of different parameters such as turbidity level, severity of partial occlusion, number of photons, propagation distance, and imaging modality. The simulation results suggest that, regardless of variations in the imaging system and degradation parameters, slight turbidity may improve underwater imaging performance under occlusion. The optical experimental results agree with the simulations in that slightly increasing the turbidity level may boost image quality in the scenarios we considered. To the best of our knowledge, this is the first report to theoretically analyze and experimentally validate the phenomenon that turbidity may improve underwater imaging performance in certain degraded environments.
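For intuition about the kind of Monte Carlo analysis described above, the toy sketch below traces photons through a scattering and absorbing medium using a Henyey-Greenstein phase function and accumulates the weighted fraction that reaches a sensor plane. The coefficients, anisotropy factor, and overall structure are assumptions for illustration and do not reproduce the paper's simulation.

```python
import numpy as np

def photons_reaching_sensor(n_photons, mu_s, mu_a, distance, g=0.9, seed=0):
    """Toy Monte Carlo estimate of the weighted fraction of photons that reach
    a sensor plane at `distance` (m) through a medium with scattering
    coefficient mu_s and absorption coefficient mu_a (per meter)."""
    rng = np.random.default_rng(seed)
    mu_t = mu_s + mu_a
    total_weight = 0.0
    for _ in range(n_photons):
        z, cos_theta, weight = 0.0, 1.0, 1.0
        while 0.0 <= z < distance and weight > 1e-3:
            step = -np.log(1.0 - rng.random()) / mu_t   # sample free path
            z += step * cos_theta                       # advance along axis
            weight *= mu_s / mu_t                       # absorption loss
            u = rng.random()                            # new scattering angle
            hg = (1.0 - g**2) / (1.0 - g + 2.0 * g * u)
            cos_theta = (1.0 + g**2 - hg**2) / (2.0 * g)
        if z >= distance:
            total_weight += weight
    return total_weight / n_photons

print(photons_reaching_sensor(5000, mu_s=0.5, mu_a=0.05, distance=1.0))
```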
-
In this paper, we propose a procedure to analyze lensless single random phase encoding (SRPE) systems and assess their robustness to variations in image sensor pixel size as the input signal frequency is varied. We use wave propagation to estimate the maximum pixel size at which lensless SRPE intensity patterns can be captured such that an input signal frequency is recorded accurately. Lensless SRPE systems are constructed by placing a diffuser in front of an image sensor so that the optical field coming from an object is modulated before its intensity signature is captured at the image sensor. Since diffuser surfaces contain very fine features, the captured intensity patterns always contain high spatial frequencies regardless of the input frequencies. Hence, a conventional Nyquist-criterion-based treatment of this problem would not give a meaningful characterization. We propose a theoretical estimate for the upper limit of the image sensor pixel size such that variations in the input signal are adequately captured by the sensor pixels. A numerical simulation of lensless SRPE systems using angular spectrum propagation and mutual information verifies our theoretical analysis; the simulation estimate of the sampling criterion matches our proposed theoretical estimate very closely. We provide a closed-form estimate for the maximum sensor pixel size as a function of the input frequency and system parameters such that an input signal frequency can be captured accurately, making it possible to optimize general-purpose SRPE systems. Our results show that lensless SRPE systems have much greater robustness to sensor pixel size than lens-based systems, which makes SRPE useful for exotic imagers with large pixel sizes. To the best of our knowledge, this is the first report to investigate sampling in lensless SRPE systems as a function of input image frequency and the physical parameters of the system in order to estimate the maximum image sensor pixel size.
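A minimal sketch of the angular spectrum propagation method named above is given below, propagating a field through free space to a sensor plane. The wavelength, pixel pitch, propagation distance, and the random phase screen standing in for a diffuser are assumed values, not the parameters used in the paper.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, distance):
    """Propagate a complex optical field by `distance` (m) using the angular
    spectrum method; `pixel_pitch` is the sample spacing (m)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2      # squared axial frequency
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * distance) * (arg >= 0)    # suppress evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: plane wave modulated by a hypothetical random phase diffuser
rng = np.random.default_rng(1)
field_at_diffuser = np.exp(1j * rng.uniform(0, 2 * np.pi, (256, 256)))
sensor_field = angular_spectrum_propagate(field_at_diffuser, 532e-9, 3.45e-6, 5e-3)
intensity = np.abs(sensor_field) ** 2
```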
-
Image restoration aims to recover a clean image from a noisy one. It has long been a topic of interest for researchers in imaging, optical science, and computer vision, and the problem becomes more challenging as the imaging environment deteriorates. Several computational approaches, ranging from statistical methods to deep learning, have been proposed over the years. Deep learning-based approaches provide promising image restoration results, but they are purely data driven, and the requirement of large datasets (paired or unpaired) for training may limit their utility for certain physical problems. Recently, physics-informed image restoration techniques have gained importance due to their ability to enhance performance, infer aspects of the degradation process, and quantify the uncertainty in the prediction results. In this paper, we propose a physics-informed deep learning approach with simultaneous parameter estimation using 3D integral imaging and a Bayesian neural network (BNN). An image-to-image mapping architecture is first pretrained to generate a clean image from the degraded image and is then trained jointly with the Bayesian neural network for simultaneous parameter estimation. For network training, data simulated with the physical model is used instead of actual degraded data. The proposed approach has been tested experimentally under degradations such as low illumination and partial occlusion, and the recovery results are promising despite training on a simulated dataset. We have tested the performance of the approach under varying illumination levels. Additionally, the proposed approach has been compared with the corresponding 2D imaging-based approach; the results suggest significant improvements over 2D imaging even when trained on similar datasets. The parameter estimation results also demonstrate the utility of the approach in estimating the degradation parameter in addition to restoring the image under the experimental conditions considered.
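As an illustration of generating training data from a physical degradation model rather than from measured data, the sketch below simulates a low-illumination capture by scaling a clean image to an assumed mean photon count and applying Poisson shot noise. The specific model and the photon_level parameter are assumptions for illustration, not the degradation model used in the paper.

```python
import numpy as np

def simulate_low_illumination(clean, photon_level, seed=0):
    """Simulate a low-illumination capture of a clean image (values in [0, 1])
    by scaling to an assumed mean photon count and adding Poisson shot noise.
    photon_level is the degradation parameter a network could later estimate."""
    rng = np.random.default_rng(seed)
    photons = rng.poisson(clean * photon_level)
    return np.clip(photons / photon_level, 0.0, 1.0)

clean = np.linspace(0, 1, 256 * 256).reshape(256, 256)
degraded = simulate_low_illumination(clean, photon_level=5.0)
```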
-
The two-point source longitudinal resolution of three-dimensional integral imaging depends on several factors, including the number of sensors, sensor pixel size, pitch between sensors, and the lens point spread function. We assume the two point sources to be resolved if their point spread functions can be resolved in any one of the sensors. Previous studies of integral imaging longitudinal resolution either rely on a geometrical optics formulation or assume the point spread function to be of sub-pixel size, thus neglecting the effect of the lens. These studies also assume both point sources to be in focus in the captured elemental images and, more importantly, do not consider the effect of noise. In this manuscript, we use a Gaussian process-based two-point source resolution criterion to overcome these limitations. We compute the circle of confusion to model the out-of-focus blurring effect. The Gaussian process-based criterion allows us to study the effect of noise on the longitudinal resolution. In the absence of noise, we also present a simple analytical expression for longitudinal resolution that approximately matches the Gaussian process-based formulation. We further investigate the dependence of the longitudinal resolution on the parallax of the integral imaging system. We present optical experiments to validate our results; the experiments demonstrate agreement with the Gaussian process-based two-point source resolution criterion.
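For reference, the out-of-focus blur mentioned above can be modeled with the standard thin-lens circle-of-confusion relation. The sketch below evaluates it for an assumed 50 mm f/2 lens; all numbers are chosen purely for illustration and are not the paper's experimental parameters.

```python
def circle_of_confusion(f, f_number, focus_dist, obj_dist):
    """Blur-circle diameter (same units as f) for a point at obj_dist when a
    thin lens of focal length f and the given f-number is focused at focus_dist."""
    aperture = f / f_number
    return aperture * f * abs(obj_dist - focus_dist) / (obj_dist * (focus_dist - f))

# Example: 50 mm f/2 lens focused at 1 m, point source at 1.2 m
print(circle_of_confusion(f=0.05, f_number=2.0, focus_dist=1.0, obj_dist=1.2))
```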
-
We propose polarimetric three-dimensional (3D) integral imaging profilometry and investigate its performance under degraded environmental conditions in terms of the accuracy of object depth acquisition. Integral imaging-based profilometry provides depth information by capturing and utilizing multiple perspectives of the observed object. However, the performance of depth map generation may degrade due to lighting conditions, partial occlusion, and the object surface material. To improve the accuracy of depth estimation under these conditions, we propose to use polarimetric profilometry. Our experiments indicate that the proposed approach may result in more accurate depth estimation under degraded environmental conditions. We measure a number of metrics to evaluate the performance of the proposed polarimetric profilometry methods for generating the depth map under degraded conditions. Experimental results are presented to evaluate the robustness of the proposed method under degraded environmental conditions and to compare its performance with conventional integral imaging. To the best of our knowledge, this is the first report on polarimetric 3D integral imaging profilometry and on its performance in degraded environments.
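One common way to obtain depth from integral imaging, consistent with the multi-perspective capture described above, is computational shift-and-sum reconstruction followed by a per-pixel focus search over candidate depths. The sketch below shows only the shift-and-sum step under assumed camera-array parameters; it is a generic illustration, not the paper's polarimetric profilometry pipeline, and the shift sign may need flipping depending on the camera geometry.

```python
import numpy as np

def reconstruct_at_depth(elemental_images, positions, pitch, f, pixel_size, depth):
    """Shift-and-average integral imaging reconstruction at a candidate depth.
    `elemental_images` is a list of 2D arrays; `positions` gives each camera's
    (row, col) index on the synthetic-aperture grid."""
    shift_per_index = pitch * f / (pixel_size * depth)   # pixel shift per camera index
    acc = np.zeros_like(elemental_images[0], dtype=float)
    for img, (r, c) in zip(elemental_images, positions):
        dy = int(round(r * shift_per_index))
        dx = int(round(c * shift_per_index))
        acc += np.roll(img, shift=(dy, dx), axis=(0, 1))
    return acc / len(elemental_images)
```

A depth map would then be built by repeating this reconstruction over a range of candidate depths and selecting, per pixel, the depth that maximizes a sharpness or variance metric.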
-
Lensless devices paired with deep learning models have recently shown great promise as a novel approach to biological screening. As a first step toward performing automated lensless cell identification non-invasively, we present a field-portable, compact lensless system that can detect and classify smeared whole blood samples through layers of scattering media. In this system, light from a partially coherent laser diode propagates through the sample, which is positioned between two layers of scattering media, and the resultant opto-biological signature is captured by an image sensor. The signature is transformed via the local binary pattern (LBP) transformation, and the resultant LBP images are processed by a convolutional neural network (CNN) to identify the type of red blood cells in the sample. We validated our system in an experimental setup where whole blood samples are placed between two diffusive layers of increasing thickness, and the robustness of the system against variations in the layer thickness was investigated. Several CNN models were considered (i.e., AlexNet, VGG-16, and SqueezeNet), individually optimized, and compared against a traditional learning model consisting of principal component decomposition and a support vector machine (PCA + SVM). We found that a two-stage SqueezeNet architecture and VGG-16 provide the highest classification accuracy and Matthews correlation coefficient (MCC) score when applied to images acquired by our lensless system, with SqueezeNet outperforming the other classifiers when the thickness of the scattering layer is the same in training and test data (accuracy: 97.2%; MCC: 0.96), and VGG-16 proving the most robust option as the thickness of the scattering layers in the test data increases up to three times the value used during training. Altogether, this work provides a proof of concept for non-invasive blood sample identification through scattering media with lensless devices using deep learning. Our system has the potential to be a viable diagnostic device because of its low cost, field portability, and high identification accuracy.
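As background for the LBP step described above, the sketch below implements a basic 8-neighbour local binary pattern transform. The random array standing in for an opto-biological signature is an assumption for illustration; the paper's exact LBP variant is not specified here.

```python
import numpy as np

def local_binary_pattern(img):
    """Basic 8-neighbour local binary pattern of a grayscale image: each pixel
    is encoded by thresholding its 8 neighbours against the center value."""
    padded = np.pad(img, 1, mode="edge")
    center = padded[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    lbp = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = padded[1 + dy:padded.shape[0] - 1 + dy,
                           1 + dx:padded.shape[1] - 1 + dx]
        lbp |= (neighbour >= center).astype(np.uint8) << bit
    return lbp

rng = np.random.default_rng(2)
signature = rng.random((128, 128))       # stand-in for a captured signature
lbp_image = local_binary_pattern(signature)
```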